#software application security testing
Explore tagged Tumblr posts
Text
Simplify Decentralized Payments with a Unified Cash Collection Application
In a world where financial accountability is non-negotiable, Atcuality provides tools that ensure your field collections are as reliable as your core banking or ERP systems. Designed for enterprises that operate across multiple regions or teams, our cash collection application empowers agents to accept, log, and report payments using just their mobile devices. With support for QR-based transactions, offline syncing, and instant reconciliation, it bridges the gap between field activities and central operations. Managers can monitor performance in real-time, automate reporting, and minimize fraud risks with tamper-proof digital records. Industries ranging from insurance to public sector utilities trust Atcuality to improve revenue assurance and accelerate their collection cycles. With API integrations, role-based access, and custom dashboards, our application becomes the single source of truth for your field finance workflows.
#ai applications#artificial intelligence#augmented and virtual reality market#augmented reality#website development#emailmarketing#information technology#web design#web development#digital marketing#cash collection application#custom software development#custom software services#custom software solutions#custom software company#custom software design#custom application development#custom app development#application development#applications#iot applications#application security#application services#app development#app developers#app developing company#app design#software development#software testing#software company
4 notes
·
View notes
Text
From Crisis to Confidence – Atcuality Restores More Than Just Code
Your website is your digital storefront. When it gets hacked, your brand reputation and customer trust are at stake. Atcuality understands the urgency and emotional toll of such breaches. That’s why our team offers fast-acting, reliable hacked site recovery services that not only fix the problem but prevent it from recurring. We clean your site, identify the source of the attack, and patch every loophole we find. With real-time updates and continuous support, you’ll never feel alone in the recovery process. We go beyond fixing bugs—we educate you about best practices, implement enterprise-grade firewalls, and monitor your website 24/7. Regain control of your site and peace of mind with Atcuality’s recovery experts.
#seo marketing#seo services#artificial intelligence#iot applications#seo agency#azure cloud services#digital marketing#amazon web services#ai powered application#seo company#web design#web development#website#websites#website development#website optimization#website design#website seo#ui ux design#website developer near me#website services#website security#website speed optimization#website developers#web developing company#web developers#software development#software company#software testing#software services
0 notes
Text
Smart Financial Management with Atcuality’s Cash Collection Tools
Businesses need reliable tools to manage their financial operations efficiently. Atcuality offers advanced fintech solutions, including an AI-powered cash collection application that ensures smooth and timely payment collection. This innovative tool helps businesses automate billing, generate instant payment reports, and send notifications to clients, reducing payment delays. With a secure platform that supports multiple payment options, companies can offer a hassle-free experience to customers while maintaining steady cash flow. Whether you’re a startup or an established enterprise, Atcuality’s technology-driven solutions are designed to support your financial goals with ease and security.
#search engine optimisation company#search engine optimisation services#emailmarketing#search engine marketing#digital services#search engine ranking#seo#search engine optimization#digital marketing#seo company#cash collection application#software testing#software company#software engineering#software#software development#developers#applications#opensource#software solutions#software services#software support#software security#software design#software developers#application development#application modernization#application process#app#app development
1 note
·
View note
Text
AI in DevSecOps: Revolutionizing Security Testing and Code Analysis

DevSecOps, short for Development, Security, and Operations, is an approach that integrates security practices within the DevOps workflow. You can think of it as an extra step necessary for integrating security. Before, software development focused on speed and efficiency, often delaying security to the final stages.
However, the rise in cyber threats has made it essential to integrate security into every phase of the software lifecycle. This evolution gave rise to DevSecOps, ensuring that security is not an afterthought but a shared responsibility across teams.
From DevOps to DevSecOps: The Main Goal
The shift from DevOps to DevSecOps emphasizes applying security into continuous integration and delivery (CI/CD) pipelines. The main goal of DevSecOps is to build secure applications by automating security checks. This approach helps in fostering a culture where developers, operations teams, and security experts collaborate seamlessly.
How is AI Reshaping the Security Testing & Code Analysis Industry?
Artificial intelligence and generative AI are transforming the landscape of security testing and code analysis by enhancing precision, speed, and scalability. Before AI took over, manual code reviews and testing were time-consuming and prone to errors. AI-driven solutions, however, automate these processes, enabling real-time vulnerability detection and smarter decision-making.
Let’s look at how AI does that in detail:
AI models analyze code repositories to identify known and unknown vulnerabilities with higher accuracy.
Machine learning algorithms predict potential attack vectors and their impact on applications.
AI tools simulate attacks to assess application resilience, saving time and effort compared to manual testing.
AI ensures code adheres to security and performance standards by analyzing patterns and dependencies.
As you can imagine, there have been several benefits of this:
Reducing False Positives: AI algorithms improve accuracy in identifying real threats.
Accelerating Scans: Traditional methods could take hours, but AI-powered tools perform security scans in minutes.
Self-Learning Capabilities: AI systems evolve based on new data, adapting to emerging threats.
Now that we know about the benefits AI has, let’s look at some challenges AI could pose in security testing & code analysis:
AI systems require large datasets for training, which can expose sensitive information if not properly secured. This could cause disastrous data leaks.
AI models trained on incomplete or biased data may lead to blind spots and errors.
While AI automates many processes, over-reliance can result in missed threats that require human intuition to detect.
Cybercriminals are leveraging AI to create advanced malware that can bypass traditional security measures, posing a new level of risk.
Now that we know the current scenario, let’s look at how AI in DevSecOps will look like in the future:
The Future of AI in DevSecOps
AI’s role in DevSecOps will expand with emerging trends as:
Advanced algorithms will proactively search for threats across networks, to prevent attacks.
Future systems will use AI to detect vulnerabilities and automatically patch them without human intervention.
AI will monitor user and system behavior to identify anomalies, enhancing the detection of unusual activities.
Integrated AI platforms will facilitate seamless communication between development, operations, and security teams for faster decision-making.
AI is revolutionizing DevSecOps by making security testing and code analysis smarter, faster, and more effective. While challenges like data leaks and algorithm bias exist, its potential is much more than the risks it poses.
To learn how our AI-driven solutions can elevate your DevSecOps practices, contact us at Nitor Infotech.
#continuous integration#software development#software testing#engineering devops#applications development#security testing#application security scanning#software services#nitorinfotech#blog#ascendion#gen ai
0 notes
Text
TCoE Framework: Best Practices for Standardized Testing Processes
A Testing Center of Excellence (TCoE) framework focuses on unifying and optimizing testing processes across an organization. By adopting standardized practices, businesses can improve efficiency, consistency, and quality while reducing costs and redundancies.
Define Clear Objectives and Metrics
Set measurable goals for the TCoE, such as improved defect detection rates or reduced test cycle times. Establish key performance indicators (KPIs) to monitor progress and ensure alignment with business objectives.
Adopt a Robust Testing Framework
Use modular and reusable components to create a testing framework that supports both manual and automated testing. Incorporate practices like data-driven and behavior-driven testing to ensure flexibility and scalability.
Leverage the Right Tools and Technologies
Standardize tools for test automation, performance testing, and test management across teams. Integrate AI-driven tools to enhance predictive analytics and reduce test maintenance.
Focus on Skill Development
Provide continuous training to ensure teams stay updated with the latest testing methodologies and technologies. Encourage certifications and cross-functional learning.
Promote Collaboration and Knowledge Sharing
Foster collaboration between development, QA, and operations teams. Establish a knowledge repository for sharing test scripts, results, and best practices.
By implementing these best practices, organizations can build a high-performing TCoE framework that ensures seamless, standardized, and efficient testing processes.
#web automation testing#ui automation testing#web ui testing#ui testing in software testing#automated website testing#web ui automation#web automation tool#web automation software#automated web ui testing#web api test tool#web app testing#web app automation#web app performance testing#security testing for web application
0 notes
Text
Web App Development Services in San Francisco

SleekSky stands out because we're good at creating awesome web apps in San Francisco. We take your ideas and turn them into cool digital solutions using our expertise, creativity, and the latest technologies. Our experienced team works closely with you to understand what you need, making everything personalized. From the start to the finish, SleekSky makes sure the whole process is smooth. We create top-notch web apps that shine in San Francisco. Choose SleekSky to make your online presence stand out with our excellent web app development services in San Francisco, where your ideas meet amazing technology!
Choose our software expertise: https://bit.ly/47HDEZZ
#software development#software development company#software development services#softwaredeveloper#software development solutions#software company#app development company#full stack developer#mobile app development#app development#web app development#web application development#web application testing#web application security#web app developers
0 notes
Text
Unlock Your Potential with the Best Software Testing Course in Ludhiana
Are you ready to elevate your career in the dynamic world of IT? Dive into our comprehensive best Software Testing Course in Ludhiana, Punjab, Moradabad, Delhi, Noida and all cities in India. Designed to empower you with cutting-edge skills and hands-on experience. Our expert-led program ensures a deep understanding of industry-standard testing methodologies, tools, and practices.

Why choose us? We blend theoretical knowledge with real-world scenarios, equipping you for success in today's competitive job market. Gain proficiency in manual and automated testing, explore the nuances of quality assurance, and emerge as a sought-after testing professional.
Join us to enjoy interactive sessions, practical assignments, and personalized mentorship. Don't miss this opportunity to master the art of software testing and open doors to exciting career prospects. Enroll now and take the first step towards a rewarding future!
0 notes
Text
Ethical Hacking and Penetration Testing
In today's digital landscape, the need for robust cybersecurity measures has become paramount. With the rise in cyber threats and attacks, organizations and individuals are constantly seeking ways to protect their valuable data and systems. Two methods that have gained significant attention in recent years are ethical hacking and penetration testing. In this article, we will explore what ethical hacking and penetration testing entail, their importance in safeguarding against cyber threats, and how they differ from each other.
Ethical Hacking
Ethical hacking, also known as white-hat hacking, is the practice of intentionally infiltrating computer systems and networks to identify vulnerabilities before malicious hackers can exploit them. Ethical hackers, often referred to as cybersecurity professionals or penetration testers, use their knowledge and skills to legally and ethically assess the security of an organization's infrastructure. Their goal is to proactively identify weaknesses and vulnerabilities, allowing organizations to patch them up before they can be exploited.
One of the key aspects of ethical hacking is the mindset of thinking like a hacker. Ethical hackers adopt the same techniques and methodologies that malicious hackers use, but with the intention of helping organizations strengthen their security. They employ a range of tools and techniques to conduct penetration testing, including vulnerability scanning, network mapping, and social engineering. By simulating real-world attack scenarios, ethical hackers can uncover vulnerabilities that might otherwise go unnoticed.
Ethical hacking plays a crucial role in ensuring the security and integrity of computer systems. It enables organizations to identify and address potential weaknesses in their infrastructure, preventing unauthorized access and data breaches. By proactively seeking out vulnerabilities, ethical hackers help organizations stay one step ahead of cybercriminals.
The Growing Importance of Ethical Hacking
In a world where cyberattacks are becoming increasingly sophisticated, ethical hacking plays a pivotal role in safeguarding sensitive information and critical infrastructure. Businesses, governments, and organizations across the globe recognize the value of proactive cybersecurity measures.
Ethical hackers and penetration testers are in high demand, with career opportunities spanning various industries, including finance, healthcare, and tech. Their skills are crucial for fortifying defenses, ensuring compliance with data protection regulations, and ultimately, preserving trust in the digital world.
The Intricate Art of Penetration Testing
Penetration Testing, often referred to as pen testing, is a subset of ethical hacking. It involves actively simulating cyberattacks to evaluate the security of a system, network, or application. Penetration testers employ a systematic approach, mimicking the tactics of real attackers to identify vulnerabilities and assess the potential impact of an attack.
There are different types of penetration testing, such as network penetration testing, application penetration testing, and social engineering testing. Each type focuses on specific areas of an organization's infrastructure and helps uncover vulnerabilities that may have been overlooked. By conducting regular penetration tests, organizations can ensure that their security measures are effective and up to date.
Conclusion
In the ever-evolving world of cybersecurity, ethical hacking and penetration testing are indispensable tools for organizations and individuals seeking to safeguard their digital assets. Ethical hacking allows organizations to proactively identify vulnerabilities and strengthen their security measures before malicious hackers can exploit them. Penetration testing goes a step further by simulating real-world attacks to assess the effectiveness of existing security measures.
#cyber security#ethicalhacking#penetration testing#hiteshi technologies#mobile app development company#mobile app development services#iphone application development company#android application development company#technology#custom software development
1 note
·
View note
Text
Conspiratorialism as a material phenomenon

I'll be in TUCSON, AZ from November 8-10: I'm the GUEST OF HONOR at the TUSCON SCIENCE FICTION CONVENTION.
I think it behooves us to be a little skeptical of stories about AI driving people to believe wrong things and commit ugly actions. Not that I like the AI slop that is filling up our social media, but when we look at the ways that AI is harming us, slop is pretty low on the list.
The real AI harms come from the actual things that AI companies sell AI to do. There's the AI gun-detector gadgets that the credulous Mayor Eric Adams put in NYC subways, which led to 2,749 invasive searches and turned up zero guns:
https://www.cbsnews.com/newyork/news/nycs-subway-weapons-detector-pilot-program-ends/
Any time AI is used to predict crime – predictive policing, bail determinations, Child Protective Services red flags – they magnify the biases already present in these systems, and, even worse, they give this bias the veneer of scientific neutrality. This process is called "empiricism-washing," and you know you're experiencing it when you hear some variation on "it's just math, math can't be racist":
https://pluralistic.net/2020/06/23/cryptocidal-maniacs/#phrenology
When AI is used to replace customer service representatives, it systematically defrauds customers, while providing an "accountability sink" that allows the company to disclaim responsibility for the thefts:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
When AI is used to perform high-velocity "decision support" that is supposed to inform a "human in the loop," it quickly overwhelms its human overseer, who takes on the role of "moral crumple zone," pressing the "OK" button as fast as they can. This is bad enough when the sacrificial victim is a human overseeing, say, proctoring software that accuses remote students of cheating on their tests:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
But it's potentially lethal when the AI is a transcription engine that doctors have to use to feed notes to a data-hungry electronic health record system that is optimized to commit health insurance fraud by seeking out pretenses to "upcode" a patient's treatment. Those AIs are prone to inventing things the doctor never said, inserting them into the record that the doctor is supposed to review, but remember, the only reason the AI is there at all is that the doctor is being asked to do so much paperwork that they don't have time to treat their patients:
https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
My point is that "worrying about AI" is a zero-sum game. When we train our fire on the stuff that isn't important to the AI stock swindlers' business-plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm – policing, health, security, customer service – we also focus on the AI applications that make the most money and drive the most investment.
AI hasn't attracted hundreds of billions in investment capital because investors love AI slop. All the money pouring into the system – from investors, from customers, from easily gulled big-city mayors – is chasing things that AI is objectively very bad at and those things also cause much more harm than AI slop. If you want to be a good AI critic, you should devote the majority of your focus to these applications. Sure, they're not as visually arresting, but discrediting them is financially arresting, and that's what really matters.
All that said: AI slop is real, there is a lot of it, and just because it doesn't warrant priority over the stuff AI companies actually sell, it still has cultural significance and is worth considering.
AI slop has turned Facebook into an anaerobic lagoon of botshit, just the laziest, grossest engagement bait, much of it the product of rise-and-grind spammers who avidly consume get rich quick "courses" and then churn out a torrent of "shrimp Jesus" and fake chainsaw sculptures:
https://www.404media.co/email/1cdf7620-2e2f-4450-9cd9-e041f4f0c27f/
For poor engagement farmers in the global south chasing the fractional pennies that Facebook shells out for successful clickbait, the actual content of the slop is beside the point. These spammers aren't necessarily tuned into the psyche of the wealthy-world Facebook users who represent Meta's top monetization subjects. They're just trying everything and doubling down on anything that moves the needle, A/B splitting their way into weird, hyper-optimized, grotesque crap:
https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/
In other words, Facebook's AI spammers are laying out a banquet of arbitrary possibilities, like the letters on a Ouija board, and the Facebook users' clicks and engagement are a collective ideomotor response, moving the algorithm's planchette to the options that tug hardest at our collective delights (or, more often, disgusts).
So, rather than thinking of AI spammers as creating the ideological and aesthetic trends that drive millions of confused Facebook users into condemning, praising, and arguing about surreal botshit, it's more true to say that spammers are discovering these trends within their subjects' collective yearnings and terrors, and then refining them by exploring endlessly ramified variations in search of unsuspected niches.
(If you know anything about AI, this may remind you of something: a Generative Adversarial Network, in which one bot creates variations on a theme, and another bot ranks how closely the variations approach some ideal. In this case, the spammers are the generators and the Facebook users they evince reactions from are the discriminators)
https://en.wikipedia.org/wiki/Generative_adversarial_network
I got to thinking about this today while reading User Mag, Taylor Lorenz's superb newsletter, and her reporting on a new AI slop trend, "My neighbor’s ridiculous reason for egging my car":
https://www.usermag.co/p/my-neighbors-ridiculous-reason-for
The "egging my car" slop consists of endless variations on a story in which the poster (generally a figure of sympathy, canonically a single mother of newborn twins) complains that her awful neighbor threw dozens of eggs at her car to punish her for parking in a way that blocked his elaborate Hallowe'en display. The text is accompanied by an AI-generated image showing a modest family car that has been absolutely plastered with broken eggs, dozens upon dozens of them.
According to Lorenz, variations on this slop are topping very large Facebook discussion forums totalling millions of users, like "Movie Character…,USA Story, Volleyball Women, Top Trends, Love Style, and God Bless." These posts link to SEO sites laden with programmatic advertising.
The funnel goes:
i. Create outrage and hence broad reach;
ii, A small percentage of those who see the post will click through to the SEO site;
iii. A small fraction of those users will click a low-quality ad;
iv. The ad will pay homeopathic sub-pennies to the spammer.
The revenue per user on this kind of scam is next to nothing, so it only works if it can get very broad reach, which is why the spam is so designed for engagement maximization. The more discussion a post generates, the more users Facebook recommends it to.
These are very effective engagement bait. Almost all AI slop gets some free engagement in the form of arguments between users who don't know they're commenting an AI scam and people hectoring them for falling for the scam. This is like the free square in the middle of a bingo card.
Beyond that, there's multivalent outrage: some users are furious about food wastage; others about the poor, victimized "mother" (some users are furious about both). Not only do users get to voice their fury at both of these imaginary sins, they can also argue with one another about whether, say, food wastage even matters when compared to the petty-minded aggression of the "perpetrator." These discussions also offer lots of opportunity for violent fantasies about the bad guy getting a comeuppance, offers to travel to the imaginary AI-generated suburb to dole out a beating, etc. All in all, the spammers behind this tedious fiction have really figured out how to rope in all kinds of users' attention.
Of course, the spammers don't get much from this. There isn't such a thing as an "attention economy." You can't use attention as a unit of account, a medium of exchange or a store of value. Attention – like everything else that you can't build an economy upon, such as cryptocurrency – must be converted to money before it has economic significance. Hence that tooth-achingly trite high-tech neologism, "monetization."
The monetization of attention is very poor, but AI is heavily subsidized or even free (for now), so the largest venture capital and private equity funds in the world are spending billions in public pension money and rich peoples' savings into CO2 plumes, GPUs, and botshit so that a bunch of hustle-culture weirdos in the Pacific Rim can make a few dollars by tricking people into clicking through engagement bait slop – twice.
The slop isn't the point of this, but the slop does have the useful function of making the collective ideomotor response visible and thus providing a peek into our hopes and fears. What does the "egging my car" slop say about the things that we're thinking about?
Lorenz cites Jamie Cohen, a media scholar at CUNY Queens, who points out that subtext of this slop is "fear and distrust in people about their neighbors." Cohen predicts that "the next trend, is going to be stranger and more violent.”
This feels right to me. The corollary of mistrusting your neighbors, of course, is trusting only yourself and your family. Or, as Margaret Thatcher liked to say, "There is no such thing as society. There are individual men and women and there are families."
We are living in the tail end of a 40 year experiment in structuring our world as though "there is no such thing as society." We've gutted our welfare net, shut down or privatized public services, all but abolished solidaristic institutions like unions.
This isn't mere aesthetics: an atomized society is far more hospitable to extreme wealth inequality than one in which we are all in it together. When your power comes from being a "wise consumer" who "votes with your wallet," then all you can do about the climate emergency is buy a different kind of car – you can't build the public transit system that will make cars obsolete.
When you "vote with your wallet" all you can do about animal cruelty and habitat loss is eat less meat. When you "vote with your wallet" all you can do about high drug prices is "shop around for a bargain." When you vote with your wallet, all you can do when your bank forecloses on your home is "choose your next lender more carefully."
Most importantly, when you vote with your wallet, you cast a ballot in an election that the people with the thickest wallets always win. No wonder those people have spent so long teaching us that we can't trust our neighbors, that there is no such thing as society, that we can't have nice things. That there is no alternative.
The commercial surveillance industry really wants you to believe that they're good at convincing people of things, because that's a good way to sell advertising. But claims of mind-control are pretty goddamned improbable – everyone who ever claimed to have managed the trick was lying, from Rasputin to MK-ULTRA:
https://pluralistic.net/HowToDestroySurveillanceCapitalism
Rather than seeing these platforms as convincing people of things, we should understand them as discovering and reinforcing the ideology that people have been driven to by material conditions. Platforms like Facebook show us to one another, let us form groups that can imperfectly fill in for the solidarity we're desperate for after 40 years of "no such thing as society."
The most interesting thing about "egging my car" slop is that it reveals that so many of us are convinced of two contradictory things: first, that everyone else is a monster who will turn on you for the pettiest of reasons; and second, that we're all the kind of people who would stick up for the victims of those monsters.
Tor Books as just published two new, free LITTLE BROTHER stories: VIGILANT, about creepy surveillance in distance education; and SPILL, about oil pipelines and indigenous landback.

If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/10/29/hobbesian-slop/#cui-bono
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#taylor lorenz#conspiratorialism#conspiracy fantasy#mind control#a paradise built in hell#solnit#ai slop#ai#disinformation#materialism#doppelganger#naomi klein
308 notes
·
View notes
Text
Ever since OpenAI released ChatGPT at the end of 2022, hackers and security researchers have tried to find holes in large language models (LLMs) to get around their guardrails and trick them into spewing out hate speech, bomb-making instructions, propaganda, and other harmful content. In response, OpenAI and other generative AI developers have refined their system defenses to make it more difficult to carry out these attacks. But as the Chinese AI platform DeepSeek rockets to prominence with its new, cheaper R1 reasoning model, its safety protections appear to be far behind those of its established competitors.
Today, security researchers from Cisco and the University of Pennsylvania are publishing findings showing that, when tested with 50 malicious prompts designed to elicit toxic content, DeepSeek’s model did not detect or block a single one. In other words, the researchers say they were shocked to achieve a “100 percent attack success rate.”
The findings are part of a growing body of evidence that DeepSeek’s safety and security measures may not match those of other tech companies developing LLMs. DeepSeek’s censorship of subjects deemed sensitive by China’s government has also been easily bypassed.
“A hundred percent of the attacks succeeded, which tells you that there’s a trade-off,” DJ Sampath, the VP of product, AI software and platform at Cisco, tells WIRED. “Yes, it might have been cheaper to build something here, but the investment has perhaps not gone into thinking through what types of safety and security things you need to put inside of the model.”
Other researchers have had similar findings. Separate analysis published today by the AI security company Adversa AI and shared with WIRED also suggests that DeepSeek is vulnerable to a wide range of jailbreaking tactics, from simple language tricks to complex AI-generated prompts.
DeepSeek, which has been dealing with an avalanche of attention this week and has not spoken publicly about a range of questions, did not respond to WIRED’s request for comment about its model’s safety setup.
Generative AI models, like any technological system, can contain a host of weaknesses or vulnerabilities that, if exploited or set up poorly, can allow malicious actors to conduct attacks against them. For the current wave of AI systems, indirect prompt injection attacks are considered one of the biggest security flaws. These attacks involve an AI system taking in data from an outside source—perhaps hidden instructions of a website the LLM summarizes—and taking actions based on the information.
Jailbreaks, which are one kind of prompt-injection attack, allow people to get around the safety systems put in place to restrict what an LLM can generate. Tech companies don’t want people creating guides to making explosives or using their AI to create reams of disinformation, for example.
Jailbreaks started out simple, with people essentially crafting clever sentences to tell an LLM to ignore content filters—the most popular of which was called “Do Anything Now” or DAN for short. However, as AI companies have put in place more robust protections, some jailbreaks have become more sophisticated, often being generated using AI or using special and obfuscated characters. While all LLMs are susceptible to jailbreaks, and much of the information could be found through simple online searches, chatbots can still be used maliciously.
“Jailbreaks persist simply because eliminating them entirely is nearly impossible—just like buffer overflow vulnerabilities in software (which have existed for over 40 years) or SQL injection flaws in web applications (which have plagued security teams for more than two decades),” Alex Polyakov, the CEO of security firm Adversa AI, told WIRED in an email.
Cisco’s Sampath argues that as companies use more types of AI in their applications, the risks are amplified. “It starts to become a big deal when you start putting these models into important complex systems and those jailbreaks suddenly result in downstream things that increases liability, increases business risk, increases all kinds of issues for enterprises,” Sampath says.
The Cisco researchers drew their 50 randomly selected prompts to test DeepSeek’s R1 from a well-known library of standardized evaluation prompts known as HarmBench. They tested prompts from six HarmBench categories, including general harm, cybercrime, misinformation, and illegal activities. They probed the model running locally on machines rather than through DeepSeek’s website or app, which send data to China.
Beyond this, the researchers say they have also seen some potentially concerning results from testing R1 with more involved, non-linguistic attacks using things like Cyrillic characters and tailored scripts to attempt to achieve code execution. But for their initial tests, Sampath says, his team wanted to focus on findings that stemmed from a generally recognized benchmark.
Cisco also included comparisons of R1’s performance against HarmBench prompts with the performance of other models. And some, like Meta’s Llama 3.1, faltered almost as severely as DeepSeek’s R1. But Sampath emphasizes that DeepSeek’s R1 is a specific reasoning model, which takes longer to generate answers but pulls upon more complex processes to try to produce better results. Therefore, Sampath argues, the best comparison is with OpenAI’s o1 reasoning model, which fared the best of all models tested. (Meta did not immediately respond to a request for comment).
Polyakov, from Adversa AI, explains that DeepSeek appears to detect and reject some well-known jailbreak attacks, saying that “it seems that these responses are often just copied from OpenAI’s dataset.” However, Polyakov says that in his company’s tests of four different types of jailbreaks—from linguistic ones to code-based tricks—DeepSeek’s restrictions could easily be bypassed.
“Every single method worked flawlessly,” Polyakov says. “What’s even more alarming is that these aren’t novel ‘zero-day’ jailbreaks—many have been publicly known for years,” he says, claiming he saw the model go into more depth with some instructions around psychedelics than he had seen any other model create.
“DeepSeek is just another example of how every model can be broken—it’s just a matter of how much effort you put in. Some attacks might get patched, but the attack surface is infinite,” Polyakov adds. “If you’re not continuously red-teaming your AI, you’re already compromised.”
57 notes
·
View notes
Text
The cogent documentary, “Surveilled,” now available on HBO, tracks journalist Ronan Farrow as he investigates the proliferation and implementation of spyware, specifically, Pegasus, which was created by the Israeli company NSO Group. The company sells its product to clients who use it to fight crime and terrorism. It is claimed that Pegasus was instrumental in helping capture Mexican drug lord, Joaquín “El Chapo” Guzman. However, there are also reports that NSO’s products are being used to target journalists, human rights activists and political dissidents.
. . .
Farrow: I put up a piece in The New Yorker this week. It was fascinating to talk to experts in the privacy law space who are really in a high state of alarm right now. The United States, under administrations from both parties, has flirted with this technology in ways that is alarming. Under the first Trump administration, they bought Pegasus. They claimed they were buying it to test it and see what our enemies were doing, and The New York Times later sued them for more information and found really persuasive evidence that the FBI wanted to operationalize that in American law enforcement investigations.
youtube
In September, the Department of Homeland Security (D.H.S.) signed a two-million-dollar contract with Paragon, an Israeli firm whose spyware product Graphite focusses on breaching encrypted-messaging applications such as Telegram and Signal. Wired first reported that the technology was acquired by Immigration and Customs Enforcement (ICE)—an agency within D.H.S. that will soon be involved in executing the Trump Administration’s promises of mass deportations and crackdowns on border crossings. A source at Paragon told me that the deal followed a vetting process, during which the company was able to demonstrate that it had robust tools to prevent other countries that purchase its spyware from hacking Americans—but that wouldn’t limit the U.S. government’s ability to target its own citizens. The technology is part of a booming multibillion-dollar market for intrusive phone-hacking software that is making government surveillance increasingly cheap and accessible. In recent years, a number of Western democracies have been roiled by controversies in which spyware has been used, apparently by defense and intelligence agencies, to target opposition politicians, journalists, and apolitical civilians caught up in Orwellian surveillance dragnets.
Now Donald Trump and incoming members of his Administration will decide whether to curtail or expand the U.S. government’s use of this kind of technology. Privacy advocates have been in a state of high alarm about the colliding political and technological trend lines.
“It’s just so evident—the impending disaster,” Emily Tucker, the executive director at the Center on Privacy and Technology at Georgetown Law, told me. “You may believe yourself not to be in one of the vulnerable categories, but you won’t know if you’ve ended up on a list for some reason or your loved ones have. Every single person should be worried.”
40 notes
·
View notes
Text
Secure, Scalable, and Built for the Field: Atcuality Delivers
Atcuality is a technology partner focused on solving complex operational challenges with smart, mobile-based business tools. Whether you need to digitize reporting, track transactions, or reduce cash handling risks, our products are engineered with flexibility and performance in mind. Our cash collection application is trusted by logistics and field-service teams across industries to simplify collections and strengthen financial accountability. Key features include instant receipt generation, GPS verification, automated daily summaries, and bank reconciliation support—all accessible from any Android device. With real-time dashboards and customizable workflows, it turns every delivery or collection point into a transparent, auditable node in your finance system. Trust Atcuality to help your business operate faster, safer, and smarter—right from the ground up.
#artificial intelligence#ai applications#augmented and virtual reality market#digital marketing#emailmarketing#augmented reality#web development#website development#web design#information technology#website optimization#website#websites#web developing company#web developers#website security#website design#website services#ui ux design#wordpress#wordpress development#webdesign#digital services#digital consulting#software development#software testing#software company#software services#machine learning#software engineering
0 notes
Text
Revolutionize Your Receivables with Atcuality’s Collection Platform
Struggling with outdated manual collection processes? Atcuality’s comprehensive cash collection application provides everything your business needs to streamline payment collection and reconciliation. Our feature-rich platform supports real-time monitoring, customizable workflows, multi-currency support, and advanced security features. Designed to empower field agents and finance managers alike, our application reduces operational overhead while improving transparency and accountability. Seamless integration with ERP systems ensures smooth data flow across your organization. From retail networks to field services and utility providers, businesses trust Atcuality to simplify collections and boost cash flow. Partner with us to modernize your operations, improve customer satisfaction, and drive sustainable growth. Experience digital transformation with Atcuality.
#seo marketing#digital marketing#artificial intelligence#iot applications#seo agency#seo services#azure cloud services#seo company#amazon web services#ai powered application#cash collection application#software engineering#software testing#software company#software services#information technology#software development#technology#software#software consulting#applications#application development#mobile application development#ai applications#application security#application modernization#application process#application services#app design#app developers
1 note
·
View note
Text
Innovative Solutions for Seamless Digital Transformation - Atcuality
Atcuality crafts tailor-made digital solutions that help businesses stay ahead in a competitive marketplace. With a team of experienced developers, designers, and digital marketers, we specialize in creating functional websites, high-performing mobile apps, AI-driven platforms, and immersive AR/VR experiences. Our solutions are built with precision and creativity to meet the unique needs of each client. One of our core services is Telegram Bot Creation, designed to automate customer engagement, streamline communication, and deliver instant support through highly interactive bots. These bots not only handle user queries but also execute complex tasks like sending alerts, integrating APIs, and providing real-time updates, making them invaluable for businesses seeking efficiency. Additionally, Atcuality offers digital marketing strategies, SEO optimization, and blockchain development services to ensure holistic digital growth. Choose Atcuality to empower your business with innovative technologies and expert-driven strategies.

#search engine marketing#emailmarketing#search engine optimisation company#search engine optimization#search engine ranking#seo#digital services#search engine optimisation services#digital marketing#seo company#telegram#telegram channel#applications#software#digital transformation#technology#software testing#software company#software engineering#software development#software solutions#software security#software services#software support#app development#app developers#app developing company#cash collection application#application modernization#application development
0 notes
Text
It's always "funny" to remember that software development as field often operates on the implicit and completely unsupported assumption that security bugs are fixed faster than they are introduced, adjusting for security bug severity.
This assumption is baked into security policies that are enforced at the organizational level regardless of whether they are locally good ideas or not. So you have all sorts of software updating basically automatically and this is supposedly proof that you deserve that SOC2 certification.
Different companies have different incentives. There are two main incentives:
Limiting legal liability
Improving security outcomes for users
Most companies have an overwhelming proportion of the first incentive.
This would be closer to OK if people were more honest about it, but even within a company they often start developing The Emperor's New Clothes types of behaviour.
---
I also suspect that security has generally been a convenient scapegoat to justify annoying, intrusive and outright abusive auto-updating practices in consumer software. "Nevermind when we introduced that critical security bug and just update every day for us, alright??"
Product managers almost always want every user to be on the latest version, for many reasons of varying coherence. For example, it enables A/B testing (provided your software doesn't just silently hotpatch it without your consent anyway).
---
I bring this up because (1) I felt like it, (2) there are a lot of not-so-well-supported assumptions in this field, which are mainly propagated for unrelated reasons. Companies will try to select assumptions that suit them.
Yes, if someone does software development right, the software should converge towards being more secure as it gets more updates. But the reality is that libraries and applications are heavily heterogenous -- they have different risk profiles, different development practices, different development velocities, and different tooling. The correct policy is more complicated and contextual.
Corporate incentives taint the field epistemologically. There's a general desire to confuse what is good for the corporation with what is good for users with what is good for the field.
The way this happens isn't by proposing obviously insane practices, but by taking things that sound maybe-reasonable and artificially amplifying confidence levels. There are aspects of the distortion that are obvious and aspects of the distortion that are most subtle. If you're on the inside and never talked to weird FOSS people, it's easy to find it normal.
One of the eternal joys and frustrations of being a software developer is trying to have effective knowledge about software development. And generally a pre-requisite to that is not believing false things.
For all the bullshit that goes on in the field, I feel _good_ about being able to form my own opinions. The situation, roughly speaking, is not rosy, but learning to derive some enjoyment from countering harmful and incorrect beliefs is a good adaptation. If everyone with a clue becomes miserable and frustrated then computing is doomed. So my first duty is to myself -- to talk about such things without being miserable. I tend to do a pretty okay job at that.
#i know to some of you i'm just stating the sky is blue#software#computing#security#anpost#this was an anramble at first but i just kept writing i guess#still kind of a ramble
51 notes
·
View notes
Text
NEW DELHI (Reuters) -Global makers of surveillance gear have clashed with Indian regulators in recent weeks over contentious new security rules that require manufacturers of CCTV cameras to submit hardware, software and source code for assessment in government labs, official documents and company emails show.
The security-testing policy has sparked industry warnings of supply disruptions and added to a string of disputes between Prime Minister Narendra Modi's administration and foreign companies over regulatory issues and what some perceive as protectionism.
New Delhi's approach is driven in part by its alarm about China's sophisticated surveillance capabilities, according to a top Indian official involved in the policymaking. In 2021, Modi's then-junior IT minister told parliament that 1 million cameras in government institutions were from Chinese companies and there were vulnerabilities with video data transferred to servers abroad.
Under the new requirements applicable from April, manufacturers such as China's Hikvision, Xiaomi and Dahua, South Korea's Hanwha, and Motorola Solutions of the U.S. must submit cameras for testing by Indian government labs before they can sell them in the world's most populous nation. The policy applies to all internet-connected CCTV models made or imported since April 9.
"There's always an espionage risk," Gulshan Rai, India's cybersecurity chief from 2015 to 2019, told Reuters. "Anyone can operate and control internet-connected CCTV cameras sitting in an adverse location. They need to be robust and secure."
Indian officials met on April 3 with executives of 17 foreign and domestic makers of surveillance gear, including Hanwha, Motorola, Bosch, Honeywell and Xiaomi, where many of the manufacturers said they weren't ready to meet the certification rules and lobbied unsuccessfully for a delay, according to the official minutes.
In rejecting the request, the government said India's policy "addresses a genuine security issue" and must be enforced, the minutes show.
India said in December the CCTV rules, which do not single out any country by name, aimed to "enhance the quality and cybersecurity of surveillance systems in the country."
This report is based on a Reuters review of dozens of documents, including records of meetings and emails between manufacturers and Indian IT ministry officials, and interviews with six people familiar with India's drive to scrutinize the technology. The interactions haven't been previously reported.
Insufficient testing capacity, drawn-out factory inspections and government scrutiny of sensitive source code were among key issues camera makers said had delayed approvals and risked disrupting unspecified infrastructure and commercial projects.
"Millions of dollars will be lost from the industry, sending tremors through the market," Ajay Dubey, Hanwha's director for South Asia, told India's IT ministry in an email on April 9.
The IT ministry and most of the companies identified by Reuters didn't respond to requests for comment about the discussions and the impact of the testing policy. The ministry told the executives on April 3 that it may consider accrediting more testing labs.
Millions of CCTV cameras have been installed across Indian cities, offices and residential complexes in recent years to enhance security monitoring. New Delhi has more than 250,000 cameras, according to official data, mostly mounted on poles in key locations.
The rapid take-up is set to bolster India's surveillance camera market to $7 billion by 2030, from $3.5 billion last year, Counterpoint Research analyst Varun Gupta told Reuters.
China's Hikvision and Dahua account for 30% of the market, while India's CP Plus has a 48% share, Gupta said, adding that some 80% of all CCTV components are from China.
Hanwha, Motorola Solutions and Britain's Norden Communication told officials by email in April that just a fraction of the industry's 6,000 camera models had approvals under the new rules.
CHINA CONCERN
The U.S. in 2022 banned sales of Hikvision and Dahua equipment, citing national security risks. Britain and Australia have also restricted China-made devices.
Likewise, with CCTV cameras, India "has to ensure there are checks on what is used in these devices, what chips are going in," the senior Indian official told Reuters. "China is part of the concern."
China's state security laws require organizations to cooperate with intelligence work.
Reuters reported this month that unexplained communications equipment had been found in some Chinese solar power inverters by U.S. experts who examined the products.
Since 2020, when Indian and Chinese forces clashed at their border, India has banned dozens of Chinese-owned apps, including TikTok, on national security grounds. India also tightened foreign investment rules for countries with which it shares a land border.
The remote detonation of pagers in Lebanon last year, which Reuters reported was executed by Israeli operatives targeting Hezbollah, further galvanized Indian concerns about the potential abuse of tech devices and the need to quickly enforce testing of CCTV equipment, the senior Indian official said.
The camera-testing rules don't contain a clause about land borders.
But last month, China's Xiaomi said that when it applied for testing of CCTV devices, Indian officials told the company the assessment couldn't proceed because "internal guidelines" required Xiaomi to supply more registration details of two of its China-based contract manufacturers.
"The testing lab indicated that this requirement applies to applications originating from countries that share a land border with India," the company wrote in an April 24 email to the Indian agency that oversees lab testing.
Xiaomi didn't respond to Reuters queries, and the IT ministry didn't address questions about the company's account.
China's foreign ministry told Reuters it opposes the "generalization of the concept of national security to smear and suppress Chinese companies," and hoped India would provide a non-discriminatory environment for Chinese firms.
LAB TESTING, FACTORY VISITS
While CCTV equipment supplied to India's government has had to undergo testing since June 2024, the widening of the rules to all devices has raised the stakes.
The public sector accounts for 27% of CCTV demand in India, and enterprise clients, industry, hospitality firms and homes the remaining 73%, according to Counterpoint.
The rules require CCTV cameras to have tamper-proof enclosures, strong malware detection and encryption.
Companies need to run software tools to test source code and provide reports to government labs, two camera industry executives said.
The rules allow labs to ask for source code if companies are using proprietary communication protocols in devices, rather than standard ones like Wi-Fi. They also enable Indian officials to visit device makers abroad and inspect facilities for cyber vulnerabilities.
The Indian unit of China's Infinova told IT ministry officials last month the requirements were creating challenges.
"Expectations such as source code sharing, retesting post firmware upgrades, and multiple factory audits significantly impact internal timelines," Infinova sales executive Sumeet Chanana said in an email on April 10. Infinova didn't respond to Reuters questions.
The same day, Sanjeev Gulati, India director for Taiwan-based Vivotek, warned Indian officials that "All ongoing projects will go on halt." He told Reuters this month that Vivotek had submitted product applications and hoped "to get clearance soon."
The body that examines surveillance gear is India's Standardization Testing and Quality Certification Directorate, which comes under the IT ministry. The agency has 15 labs that can review 28 applications concurrently, according to data on its website that was removed after Reuters sent questions. Each application can include up to 10 models.
As of May 28, 342 applications for hundreds of models from various manufacturers were pending, official data showed. Of those, 237 were classified as new, with 142 lodged since the April 9 deadline.
Testing had been completed on 35 of those applications, including just one from a foreign company.
India's CP Plus told Reuters it had received clearance for its flagship cameras but several more models were awaiting certification.
Bosch said it too had submitted devices for testing, but asked that Indian authorities "allow business continuity" for those products until the process is completed.
When Reuters visited New Delhi's bustling Nehru Place electronics market last week, shelves were stacked with popular CCTV cameras from Hikvision, Dahua and CP Plus.
But Sagar Sharma said revenue at his CCTV retail shop had plunged about 50% this month from April because of the slow pace of government approvals for security cameras.
"It is not possible right now to cater to big orders," he said. "We have to survive with the stock we have."
7 notes
·
View notes